An accelerated variance reducing stochastic method with Douglas-Rachford splitting
Authors
Abstract
Similar papers
Online Douglas-Rachford splitting method
Online and stochastic learning have emerged as powerful tools in large-scale optimization. In this work, we generalize the Douglas-Rachford splitting (DRS) method for minimizing composite functions to online and stochastic settings (to the best of our knowledge, this is the first time DRS has been generalized to a sequential version). We first establish an O(1/√T) regret bound for the batch DRS method. Then we ...
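For reference, the classical batch DRS iteration that this abstract generalizes alternates two proximal steps. Below is a minimal NumPy sketch for the composite problem min_x ½‖Ax − b‖² + λ‖x‖₁, where both proximal maps have closed forms; the step size, data, and iteration count are illustrative choices, not taken from the paper.

```python
import numpy as np

def prox_l1(v, t):
    """Proximal map of t * ||.||_1 (soft-thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def douglas_rachford_lasso(A, b, lam, gamma=1.0, iters=500):
    """Classical DRS for min_x 0.5*||Ax - b||^2 + lam*||x||_1.

    Iteration: y = prox_{gamma f}(z); x = prox_{gamma g}(2y - z); z += x - y.
    """
    n = A.shape[1]
    # prox of f(x) = 0.5*||Ax - b||^2 solves (I + gamma A^T A) y = z + gamma A^T b
    M = np.eye(n) + gamma * A.T @ A
    Atb = A.T @ b
    z = np.zeros(n)
    for _ in range(iters):
        y = np.linalg.solve(M, z + gamma * Atb)   # prox of the smooth term
        x = prox_l1(2 * y - z, gamma * lam)       # prox of the l1 term
        z = z + x - y                             # reflected averaging step
    return x

# Tiny usage example with synthetic data.
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 20))
x_true = rng.standard_normal(20) * (rng.random(20) < 0.3)  # sparse signal
b = A @ x_true + 0.01 * rng.standard_normal(50)
x_hat = douglas_rachford_lasso(A, b, lam=0.1)
```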
Stochastic Forward Douglas-Rachford Splitting for Monotone Inclusions
We propose a stochastic Forward Douglas-Rachford Splitting framework for finding a zero point of the sum of three maximally monotone operators in a real separable Hilbert space, where one of them is cocoercive. We first prove the weak almost sure convergence of the proposed method. We then characterize the rate of convergence in expectation in the case of strongly monotone operators. Finally, we ...
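For orientation, the problem class and the cocoercivity assumption mentioned here can be written out explicitly; this is the standard textbook formulation, not necessarily the paper's exact set of conditions.

```latex
\[
  \text{find } x \in \mathcal{H} \quad \text{such that} \quad 0 \in Ax + Bx + Cx,
\]
where $A$ and $B$ are maximally monotone on a real separable Hilbert space
$\mathcal{H}$ and $C$ is $\beta$-cocoercive for some $\beta > 0$, i.e.
\[
  \langle Cx - Cy,\; x - y \rangle \;\ge\; \beta \,\| Cx - Cy \|^{2}
  \qquad \text{for all } x, y \in \mathcal{H}.
\]
```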
Stochastic Strictly Contractive Peaceman-Rachford Splitting Method
In this paper, we propose two new Stochastic Strictly Contractive Peaceman-Rachford Splitting Methods (SCPRSM), called Stochastic SCPRSM (SS-PRSM) and Stochastic Conjugate Gradient SCPRSM (SCG-PRSM), for large-scale optimization problems. The two types of stochastic PRSM algorithms incorporate the stochastic variance reduced gradient (SVRG) and the conjugate gradient method, respectively. Stochasti...
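The SVRG ingredient mentioned here replaces the plain stochastic gradient with a control-variate estimator anchored at a periodically refreshed snapshot. A minimal sketch of that estimator, independent of the PRSM machinery in the paper and with illustrative parameter values:

```python
import numpy as np

def svrg_gradient(grad_i, i, x, x_snap, mu_snap):
    """SVRG variance-reduced gradient estimator.

    grad_i(j, w) returns the gradient of the j-th component function at w;
    x_snap is the snapshot point and mu_snap the full gradient at x_snap.
    The estimator is unbiased, and its variance shrinks as x and x_snap
    approach the optimum.
    """
    return grad_i(i, x) - grad_i(i, x_snap) + mu_snap

def svrg(grad_i, full_grad, x0, n, step=0.1, epochs=20, inner=100, seed=0):
    """Plain SVRG loop for min (1/n) * sum_j f_j(x)."""
    rng = np.random.default_rng(seed)
    x = x0.copy()
    for _ in range(epochs):
        x_snap = x.copy()
        mu_snap = full_grad(x_snap)          # one full gradient pass per epoch
        for _ in range(inner):
            i = rng.integers(n)              # sample a component uniformly
            x = x - step * svrg_gradient(grad_i, i, x, x_snap, mu_snap)
    return x

# Usage on least squares: f_j(x) = 0.5 * (a_j^T x - b_j)^2.
rng = np.random.default_rng(1)
A = rng.standard_normal((200, 10))
b = rng.standard_normal(200)
grad_i = lambda j, w: (A[j] @ w - b[j]) * A[j]
full_grad = lambda w: A.T @ (A @ w - b) / len(b)
x = svrg(grad_i, full_grad, np.zeros(10), n=200, step=0.01)
```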
On convergence rate of the Douglas-Rachford operator splitting method
This note provides a simple proof of an O(1/k) convergence rate for the Douglas-Rachford operator splitting method, where k denotes the iteration counter.
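Rates of this kind are usually stated for the fixed-point residual of the DRS operator $T$, which is firmly nonexpansive; a representative form, possibly differing from this note's exact constants and norm, is:

```latex
\[
  \|z^{k} - z^{k+1}\|^{2} \;\le\; \frac{\|z^{0} - z^{\star}\|^{2}}{k+1},
  \qquad z^{k+1} = T z^{k},
\]
where $z^{\star}$ is any fixed point of $T$, so the residual decays at rate $O(1/k)$.
```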
Douglas-Rachford splitting for nonconvex feasibility problems
We adapt the Douglas-Rachford (DR) splitting method to solve nonconvex feasibility problems by studying this method for a class of nonconvex optimization problems. While the convergence properties of the method for convex problems have been well studied, far less is known in the nonconvex setting. In this paper, for the direct adaptation of the method to minimize the sum of a proper closed funct...
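In the feasibility setting, DR is typically run on the indicator functions of the two sets, so the proximal maps reduce to projections. A minimal sketch for two sets with easy projections (convex here for simplicity, whereas the paper targets nonconvex instances):

```python
import numpy as np

def dr_feasibility(proj_A, proj_B, z0, iters=200):
    """Douglas-Rachford for 'find x in A ∩ B' via projections.

    Update: z <- z + P_B(2 P_A(z) - z) - P_A(z); the shadow sequence P_A(z)
    approaches the intersection when it is nonempty.
    """
    z = z0.copy()
    for _ in range(iters):
        x = proj_A(z)
        y = proj_B(2 * x - z)   # reflect through A, then project onto B
        z = z + y - x
    return proj_A(z)

# Usage: A = unit ball, B = hyperplane {x : <a, x> = 1} with unit normal a.
a = np.array([3.0, 4.0]); a /= np.linalg.norm(a)
proj_ball = lambda z: z / max(1.0, np.linalg.norm(z))
proj_hyper = lambda z: z - (a @ z - 1.0) * a
x = dr_feasibility(proj_ball, proj_hyper, np.zeros(2))  # converges to x = a
```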
Journal
Journal title: Machine Learning
Year: 2019
ISSN: 0885-6125, 1573-0565
DOI: 10.1007/s10994-019-05785-3